One of the key goals of collaborative robotics is sharing a workspace between a robot and a human operator. The shared workspace is used by both the human and the robot to complete a common task, either in parallel or as part of active collaboration. In both cases, the robot should not require separation from the human operator by a protective barrier (e.g., a metal cage). An important aspect of a collaborative robot is the understanding of the scene (i.e., where the human is and which objects are in the workspace and where) and the ability to react to the immediate needs of the operator (e.g., the need for a specific tool). Additionally, the robot should be able to predict the future steps of the production process.
Key Developments:
- Design and development of a device capable of scanning and reconstructing objects in the scene and the geometrical relationships between them. Similar devices exist on the market; however, most of them offer insufficient accuracy.
- Research and development of tools for efficient bi-directional communication with a human operator. The tools should allow the human to easily describe the task being performed or to issue commands to the robot using natural language. The robot should be able to communicate its state or intent (e.g., confirmation of received commands). The communication interface should be easy to learn and use.
- Development of a system capable of storing and interconnecting data acquired from individual sensors and other channels (visual, tactile, language, etc.). The system should also be capable of predicting future states of the world (bounded by the workspace) and estimating the needs of the operator in advance (a minimal sketch of such a store is given after this list).
- Design of a workspace with a collaborative robot. The workspace should support efficient collaboration between the human operator and the robot.
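The following is a minimal sketch of how such a multimodal store could be organized, assuming per-channel observations are reduced to labeled object records in a common workspace frame. All names (ObjectRecord, WorldModel, the NEXT_TOOL task knowledge) are illustrative assumptions made here, not components of the actual system.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class ObjectRecord:
    """One detected object in the shared workspace (fields are illustrative)."""
    label: str                             # e.g. "phillips_screwdriver"
    position: Tuple[float, float, float]   # metres, workspace frame
    source: str                            # sensing channel: "vision", "tactile", "language"
    timestamp: float                       # seconds since the start of the session

class WorldModel:
    """Toy store that interconnects per-channel observations and offers a
    naive prediction of the operator's next need."""

    # Hypothetical hand-written task knowledge: recognized action -> tool likely needed next.
    NEXT_TOOL = {"picked_screw": "phillips_screwdriver",
                 "picked_gear": "bearing_press"}

    def __init__(self) -> None:
        self.objects: Dict[str, ObjectRecord] = {}
        self.events: List[str] = []

    def update(self, record: ObjectRecord) -> None:
        # Later observations of the same label overwrite older ones.
        self.objects[record.label] = record

    def log_event(self, event: str) -> None:
        self.events.append(event)

    def resolve(self, phrase: str) -> Optional[ObjectRecord]:
        # Crude language grounding: match the words of the phrase against labels.
        for label, rec in self.objects.items():
            if all(word in label for word in phrase.lower().split()):
                return rec
        return None

    def predict_needed_tool(self) -> Optional[str]:
        # Anticipate the operator's need from the most recent recognized action.
        if self.events:
            return self.NEXT_TOOL.get(self.events[-1])
        return None
```

In a real system the hand-written NEXT_TOOL table would be replaced by a learned task model, and resolve() by proper language grounding; the sketch only shows how observations, events, and predictions could be connected in one structure.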
Testing Scenarios:
- Simple Commands: Robot fetches objects based on simple commands (e.g., “Pass me a screwdriver”).
- Complex Commands: Robot fetches objects based on detailed instructions (e.g., “Take the Phillips screwdriver next to the blue box and place it onto the table”). A toy mapping from such commands to fetch actions is sketched after this list.
- Action Recognition: Robot reacts to human actions and anticipates needs (e.g., the robot recognizes that the operator has picked up a screw and therefore fetches a screwdriver).
- Task Prediction: Robot predicts and prepares for the next steps in complex tasks (e.g., the robot recognizes that the human is assembling a gearbox and will need a screwdriver next; it finds and fetches the screwdriver and prepares it so that the operator can use it immediately).
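A toy illustration of the first two scenarios is given below, assuming a small hand-written inventory of known objects and keyword matching; INVENTORY and parse_fetch_command are hypothetical names used only here, and a real system would rely on learned perception and language grounding instead.

```python
from typing import Dict, Optional, Tuple

# Illustrative inventory of known objects and their workspace positions (metres).
INVENTORY: Dict[str, Tuple[float, float, float]] = {
    "phillips screwdriver": (0.42, -0.10, 0.02),
    "flat screwdriver": (0.45, 0.05, 0.02),
    "blue box": (0.30, 0.20, 0.00),
}

def parse_fetch_command(utterance: str) -> Optional[dict]:
    """Map a simple spoken command to a fetch action.
    Returns None when no known object is mentioned."""
    text = utterance.lower()
    # Prefer the longest matching object name so "phillips screwdriver"
    # wins over a bare "screwdriver" mention.
    best = None
    for name in INVENTORY:
        if name in text or name.split()[-1] in text:
            if best is None or len(name) > len(best):
                best = name
    if best is None:
        return None
    action = "place_on_table" if ("place" in text or "put" in text) else "hand_over"
    return {"object": best, "grasp_at": INVENTORY[best], "action": action}

if __name__ == "__main__":
    print(parse_fetch_command("Pass me a screwdriver"))
    print(parse_fetch_command("Take the Phillips screwdriver next to the blue box "
                              "and place it onto the table"))
```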
These developments aim to create seamless and intuitive interactions between human operators and robots, enhancing productivity and safety in collaborative environments.
Extensions
More advanced object perception and grasping functionalities were added throughout 2024. The object point clouds are preprocessed and grasp configurations are proposed taking the complete 3D shape into account (a simplified sketch is given below).
This implementation draws on the collaboration with the group of Matěj Hoffmann (FEE, CTU) and on solutions developed in the context of the IPALM project (TAČR EPSILON, no. TH05020001).
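The sketch below illustrates the general idea of proposing a grasp from the full 3D shape, assuming a point cloud already segmented for a single object and expressed in the robot base frame (z up). The function propose_top_grasp and the PCA-based heuristic are illustrative assumptions only and do not describe the actual pipeline developed within the project.

```python
import numpy as np

def propose_top_grasp(points: np.ndarray, gripper_max_width: float = 0.08):
    """Propose a single top-down grasp from an object point cloud.

    points: (N, 3) array in the robot base frame, metres, z pointing up.
    Returns (grasp_centre, approach_dir, closing_dir, width) or None.
    """
    # Basic preprocessing: drop non-finite points and centre the cloud.
    pts = points[np.isfinite(points).all(axis=1)]
    if len(pts) < 10:
        return None
    centre = pts.mean(axis=0)
    centred = pts - centre

    # PCA of the full 3D shape: eigenvectors of the covariance matrix give the
    # object's principal axes, eigenvalues its extents along those axes.
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    closing_dir = eigvecs[:, 0]                # close the gripper along the thinnest axis
    width = 4.0 * np.sqrt(eigvals[0])          # ~2 standard deviations on each side

    if width > gripper_max_width:
        return None                            # object too wide for this gripper

    approach_dir = np.array([0.0, 0.0, -1.0])  # top-down approach
    grasp_centre = centre.copy()
    grasp_centre[2] = pts[:, 2].max()          # approach towards the top of the object
    return grasp_centre, approach_dir, closing_dir, width
```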